identity theft
- North America > United States > Virginia (0.04)
- North America > United States > Texas > Travis County > Austin (0.04)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.04)
- (3 more...)
- Transportation > Ground > Road (1.00)
- Media > News (1.00)
- Government > Regional Government > North America Government > United States Government (0.95)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.71)
Identity Theft in AI Conference Peer Review
Academia heavily relies on trust. This trust-based system, however, creates a significant vulnerability: identity theft. In this Opinion column, we describe newly uncovered cases of identity theft in the scientific peer-review process in the research area of artificial intelligence (AI), involving a modus operandi that could also disrupt other academic procedures. We begin by outlining the peer-review process, focusing on scientific conferences since they are the most prominent venues of publication in computer science. Peer review is foundational to scientific inquiry, relying on researchers to voluntarily apply their expertise in evaluating scientific papers.
- Law Enforcement & Public Safety > Fraud (0.83)
- Information Technology > Security & Privacy (0.83)
A shadowy L.A. crime ring is hijacking the IDs of foreign scholars, fraud expert says
An identity theft ring believed to be based in the Burbank area is stealing the Social Security numbers of former foreign scholars. Private fraud investigators suspect the operation is connected to Armenian organized crime groups known for sophisticated financial crimes. Using apartments in the San Fernando Valley and the Glendale area, a shadowy group of identity thieves has been quietly exploiting a new kind of victim: foreign scholars who left the U.S. years ago but whose Social Security numbers still linger in American databases, according to a cybercrime expert.
- North America > United States > California > Los Angeles County > Los Angeles (0.16)
- North America > United States > California > Kern County (0.04)
- North America > Mexico (0.04)
- (9 more...)
Deep Research is the New Analytics System: Towards Building the Runtime for AI-Driven Analytics
With advances in large language models (LLMs), researchers are creating new systems that can perform AI-driven analytics over large unstructured datasets. Recent work has explored executing such analytics queries using semantic operators -- a declarative set of AI-powered data transformations with natural language specifications. However, even when optimized, these operators can be expensive to execute on millions of records, and their iterator execution semantics make them ill-suited for interactive data analytics tasks. In another line of work, Deep Research systems have demonstrated an ability to answer natural language questions over large datasets. These systems use one or more LLM agents to plan their execution, process the datasets, and iteratively refine their answer. However, these systems do not explicitly optimize their query plans, which can lead to poor plan execution. In order for AI-driven analytics to excel, we need a runtime which combines the optimized execution of semantic operators with the flexibility and more dynamic execution of Deep Research systems. As a first step towards this vision, we build a prototype which enables Deep Research agents to write and execute optimized semantic operator programs. We evaluate our prototype and demonstrate that it can outperform a handcrafted semantic operator program and open Deep Research systems on two basic queries. Compared to a standard open Deep Research agent, our prototype achieves up to 1.95x better F1-score. Furthermore, even if we give the agent access to semantic operators as tools, our prototype still achieves cost and runtime savings of up to 76.8% and 72.7% thanks to its optimized execution.
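To make the idea of a "semantic operator program" concrete, the following is a minimal, hypothetical sketch. The operator names (`sem_filter`, `sem_map`) and the stubbed LLM judgments are illustrative assumptions, not the paper's actual API; in a real system each operator would issue batched, optimized LLM calls against its natural-language specification.

```python
# Hypothetical semantic-operator pipeline sketch. The "LLM" here is a
# keyword stub standing in for a real model call.

def sem_filter(records, predicate_prompt, llm):
    """Keep records for which the LLM judges the predicate true."""
    return [r for r in records if llm(predicate_prompt, r)]

def sem_map(records, transform_prompt, llm_map):
    """Apply an LLM-specified transformation to each record."""
    return [llm_map(transform_prompt, r) for r in records]

# Stub "LLM" mimicking a semantic judgment with a keyword check.
def stub_llm(prompt, record):
    return "fraud" in record["text"].lower()

def stub_llm_map(prompt, record):
    return {"summary": record["text"][:40]}

docs = [
    {"text": "New fraud ring targets former scholars"},
    {"text": "Conference announces keynote speakers"},
]
flagged = sem_filter(docs, "Is this document about fraud?", stub_llm)
summaries = sem_map(flagged, "Summarize in one line", stub_llm_map)
```

The declarative structure is what makes optimization possible: because each operator carries a natural-language specification rather than opaque agent steps, a runtime can reorder, batch, or substitute cheaper models before execution.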
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > New York > New York County > New York City (0.05)
- (4 more...)
Identity Theft in AI Conference Peer Review
Shah, Nihar B., Bok, Melisa, Liu, Xukun, McCallum, Andrew
Abstract: We discuss newly uncovered cases of identity theft in the scientific peer-review process within artificial intelligence (AI) research, with broader implications for other academic procedures. We detail how dishonest researchers exploit the peer-review system by creating fraudulent reviewer profiles to manipulate paper evaluations, leveraging weaknesses in reviewer recruitment workflows and identity verification processes. The findings highlight the critical need for stronger safeguards against identity theft in peer review and academia at large, and to this end, we also propose mitigating strategies. Academia heavily relies on trust. This trust-based system, however, creates a significant vulnerability: identity theft.
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- North America > United States > Massachusetts (0.04)
- Law Enforcement & Public Safety > Fraud (1.00)
- Information Technology > Security & Privacy (1.00)
Not My Voice! A Taxonomy of Ethical and Safety Harms of Speech Generators
Hutiri, Wiebke, Papakyriakopoulos, Orestis, Xiang, Alice
The rapid and wide-scale adoption of AI to generate human speech poses a range of significant ethical and safety risks to society that need to be addressed. For example, a growing number of speech generation incidents are associated with swatting attacks in the United States, where anonymous perpetrators create synthetic voices that call police officers to close down schools and hospitals, or to violently gain access to innocent citizens' homes. Incidents like this demonstrate that multimodal generative AI risks and harms do not exist in isolation, but arise from the interactions of multiple stakeholders and technical AI systems. In this paper we analyse speech generation incidents to study how patterns of specific harms arise. We find that specific harms can be categorised according to the exposure of affected individuals, that is to say whether they are a subject of, interact with, suffer due to, or are excluded from speech generation systems. Similarly, specific harms are also a consequence of the motives of the creators and deployers of the systems. Based on these insights we propose a conceptual framework for modelling pathways to ethical and safety harms of AI, which we use to develop a taxonomy of harms of speech generators. Our relational approach captures the complexity of risks and harms in sociotechnical AI systems, and yields an extensible taxonomy that can support appropriate policy interventions and decision making for responsible multimodal model development and release of speech generators.
- Africa > Sudan (0.14)
- Asia > South Korea (0.14)
- South America > Venezuela (0.04)
- (13 more...)
- Media (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- (4 more...)
Training Machine Learning Algorithms to Recognize Fraud Patterns
Machine learning is seeing increasing usage across several industries, from manufacturing and engineering to retail and marketing. While ML has many applications, the average person has yet to fully understand it. In simple terms, ML allows software applications to continuously improve over time and better predict outcomes with little to no human intervention. A machine learning algorithm is a program an AI system uses to carry out a task. These algorithms can often analyze historical datasets to predict valuable outcomes.
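The idea of learning fraud patterns from historical data can be sketched with a deliberately simple example. This is an illustrative toy, not any vendor's actual algorithm: a nearest-centroid classifier over labeled historical transactions, using only the Python standard library, with made-up feature names and values.

```python
# Toy fraud-pattern learner: compute the average ("centroid") of known
# fraudulent and legitimate transactions, then classify a new transaction
# by whichever centroid it is closer to.
import math

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(transactions, labels):
    fraud = [t for t, y in zip(transactions, labels) if y == 1]
    legit = [t for t, y in zip(transactions, labels) if y == 0]
    return centroid(fraud), centroid(legit)

def predict(model, tx):
    fraud_c, legit_c = model
    return 1 if math.dist(tx, fraud_c) < math.dist(tx, legit_c) else 0

# Hypothetical features: [amount_usd, hour_of_day, is_foreign]
history = [[9500, 3, 1], [8800, 2, 1], [40, 13, 0], [75, 18, 0]]
labels  = [1, 1, 0, 0]
model = train(history, labels)
print(predict(model, [9000, 4, 1]))  # prints 1 (resembles the fraud pattern)
```

Production systems replace this with far richer models and features, but the workflow is the same: fit patterns to labeled history, then score new events against them without human intervention.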
- Law Enforcement & Public Safety > Fraud (1.00)
- Information Technology > Security & Privacy (1.00)
Deepfakes aren't going away: Future-proofing digital identity
Deepfakes aren't new, but this AI-powered technology has emerged as a pervasive threat in spreading misinformation and increasing identity fraud. The pandemic made matters worse by creating the ideal conditions for bad actors to take advantage of organizations' and consumers' blind spots, further exacerbating fraud and identity theft. Fraud stemming from deepfakes spiked during the pandemic, and poses significant challenges for financial institutions and fintechs that need to accurately authenticate and verify identities.
- North America > United States > California > San Francisco County > San Francisco (0.16)
- Asia > China > Hong Kong (0.05)
Evil twins and digital elves: How the metaverse will create new forms of fraud and deception
We humans are obsessed with technologies that blur the boundaries between what is real and what is fabricated. In fact, two of the hottest fields right now are defined by how effectively they can deceive us: the metaverse and artificial intelligence. When it comes to the metaverse, the goal of VR and AR technology is to fool the senses, making computer-generated content seem like real-world experiences. On the AI front, Alan Turing famously threw down the gauntlet, stating that the ultimate test of a human-level AI would be to successfully fool us into believing that it was human. Whether you're looking forward to these technologies or not, their power of deception will soon transform society.
Get ready for your evil twin
Earlier this year, a chilling academic study was published by researchers at Lancaster University and UC Berkeley. Using a sophisticated form of AI known as a GAN (generative adversarial network), they created artificial human faces. They discovered that this type of AI technology has become so effective that we humans can no longer tell the difference between real people and virtual people (or "veeple," as I call them). They also asked their test subjects to rate the "trustworthiness" of each face and discovered that consumers find AI-generated faces to be significantly more trustworthy than real faces.